ASSORTED NOTES FROM CLASS LECTURES

*the following is just a highlight reel, intended as a memory-stimulator for those who've attended the lectures
*this is no substitute for the lectures, and will not be fully comprehensible to those who weren't there
* in particular, the lectures consist mainly in working through specific examples, to explain the points which follow; but, for a variety of reasons, no examples are reproduced here

 

CHAPTER 1 NOTES

Notes for Week 1

Logic is the study of inference.
An argument is a set of sentences, one of which is the conclusion, the rest of which are premises. The role of the premises is to justify (i.e., provide reasons in support of) the conclusion.

In a good argument, the premises entail (or provide sufficient support for) the conclusion.
In a bad argument, even if the premises are true and relevant to the conclusion, they do not entail (or provide sufficient grounds for) the conclusion.

An argument is valid if it is not possible for its premises to be true while its conclusion is false.
Validity is a quality of argument structure. It has to do with the shape or flow of an argument, and is independent of its subject matter. If an argument is valid, then any argument of the same form is also valid.

Formal logic is the science of validity. It comprises three basic skills:
[1] identifying the structure of an argument
[2] ways of proving validity
[3] ways of proving invalidity

The founders of modern logic:
Gottfried Leibniz (1646-1716) sought to define “an alphabet of human thought” in which “there would be no more need of disputation between physicists, politicians, or philosophers than there is between two accountants. For it would suffice to take their pens in their hands, to sit down at their desks, and to say to each other (with a friend to witness, if they liked) ‘Let us calculate’.”
Gottlob Frege (1848-1925) published a work called “The Concept Script” in 1879, which is generally regarded as the birth of modern logic. Many of the elements of the system we learn in this course originated in this work.

Frege explicitly quotes Leibniz and is trying to work toward Leibniz’ goal. Crucially, though, he forgoes the vocabulary dimension of the alphabet of human thought, and instead tries to define a formal language in which “mere adherence to grammar will guarantee formal correctness of thought processes”.

Natural languages = English, Finnish, Swahili, etc.
Artificial languages = elementary arithmetic, legal jargon, etc. Artificial languages are special-purpose instruments designed for a specific context for which natural languages are not so well-suited.

The key break between traditional and modern logic is the development of an artificial language for the study of logic. This is the root of the staggering expressive and computational powers of modern logic, and of its many applications (in computers, artificial intelligence, engineering, mathematics, theoretical linguistics, etc.).

Frege’s (1879) microscope analogy: the microscope is to the eye as this artificial language designed for logic is to a natural language.

Propositional logic
Proposition = a basic bit of information, the sort of thing that can be true or false.
e.g., The sun is 93 million miles away, The sun is NOT 93 million miles away, Justice is a virtue, Paul Martin is Prime Minister of Canada, etc.

Propositional logic is the study of inferential patterns among propositions. It is the basement of modern logic, the basis of all subsequent more complex developments in logic. Chapters 1 and 2 in the text cover propositional logic.

The most important concept in propositional logic is the concept of a truth-functional connective.
[*] A connective is a function that takes propositions as input and yields a proposition as output. E.g., ‘and’, ‘or’, ‘because’, ‘since’, ‘although’, ...
[**] A connective is truth-functional iff the truth value of the output proposition depends only on the truth-values of the input propositions.

‘And’ and ‘or’ are truth-functional connectives, but ‘because’ and ‘since’ are non-truth-functional connectives.

Propositional logic is the study of truth-functional inferential patterns among propositions.
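*The defining mark of truth-functionality can be made concrete in a few lines of code. A Python sketch (mine, not part of the course software): a truth-functional connective is literally a function from input truth values to an output truth value.

```python
# 'and' and 'or' as truth functions: the output truth value is
# computed from the input truth values alone. ('Because' has no
# such table -- knowing the truth values of P and Q does not by
# itself settle the truth value of 'P because Q'.)
def conj(p, q):        # 'P and Q'
    return p and q

def disj(p, q):        # 'P or Q' (inclusive)
    return p or q

# The full table over the four input cases:
for p in (True, False):
    for q in (True, False):
        print(p, q, conj(p, q), disj(p, q))
```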

 

The syntax of the language of Ch.1:
[1] Any propositional variable (P-Z) is a well-formed formula (or WFF).
[2] Any WFF preceded by '~' is a WFF.
[3] '→' flanked by any 2 WFFs (and enclosed in parentheses) is a WFF.

Clause [1] WFFs are ATOMIC, clause [2] WFFs are NEGATIONS, clause [3] WFFs are CONDITIONALS.

Symbolizing is an essential skill in the independent critical thinker's repertoire. To symbolize an argument is to penetrate through the wall of words and identify its logical form--i.e., identify exactly what the conclusion is, and exactly what the premises are. Every test and exam in this course will consist of 25%-40% symbolization questions, so get practicing!!

*Note especially the discussions of stylistic variants around pp.4-8, and make sure you understand the logic of 'only'.

(The simple rule of thumb: whereas ‘if’ (and its stylistic variants) introduce the antecedent, ‘only if’ (and its stylistic variants) introduce the consequent. The underlying explanation: 'P if Q' says that Q is a sufficient condition for P; whereas 'P only if Q' says that Q is a necessary condition for P. The more examples you think about, the more sense this will make.)
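*The rule of thumb can also be checked by brute force. A Python sketch (mine, not from the text): 'P if Q' comes out as Q→P, 'P only if Q' comes out as P→Q, and the two genuinely come apart.

```python
def implies(a, b):
    return (not a) or b

# 'P if Q'      : Q is sufficient for P, i.e. Q -> P
# 'P only if Q' : Q is necessary for P, i.e. P -> Q
p_if_q      = lambda p, q: implies(q, p)
p_only_if_q = lambda p, q: implies(p, q)

# Find every truth-value case where the two readings disagree:
cases = [(p, q) for p in (True, False) for q in (True, False)]
differs = [c for c in cases if p_if_q(*c) != p_only_if_q(*c)]
# They disagree exactly when P and Q have different truth values.
```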

 

A variable = an arbitrary name for any member of a class
We are familiar with variables that range over #s from algebra: e.g., ‘x + y’, ‘2x + 6’ …
Here ‘x’ and ‘y’ stand for an arbitrary number. (perhaps better: ‘… stand, arbitrarily, for a number’.)

But the concept of a variable is clearly more general than this:
e.g., If a person—let’s call her ‘X’—wants to succeed in this business, then X had better …; but X had better not …
e.g., What are the conditions under which one country, X, is justified in declaring war on another country, Y?
1. If Y has physically attacked X
2. If X has reason to believe that Y is guilty of crimes against humanity …

Essentially, variables are just pronouns (like ‘he’, ‘she’, ‘it’). You can introduce variables as arbitrary names of the members of any class that you like.

Chpt 1: P-Z are the propositional variables
*In subsequent chapters, as our language gets more complex and discriminating, different kinds of variables get added into the mix

The syntax of Ch.1:
1. P-Z is a well-formed formula
2. any WFF preceded by ‘~’ is a WFF
3. ‘→’ flanked by any 2 WFFs (and enclosed in parentheses) is a WFF

*Note that these rules are recursive, which means that the output of a rule is then fair game as input for either the same rule, or a different rule.
*It follows that these three clauses license an infinite number of WFFs.
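*Recursion in the syntax can be made concrete with a little code. A Python sketch (mine, not the course software's parser; the string notation, with '->' for '→', is my own) implementing the three clauses directly:

```python
# A recursive well-formedness checker for the Ch.1 language:
# atoms are P-Z, '~' prefixes a WFF, and '(A->B)' is a WFF
# whenever A and B are. (A sketch only -- it returns at the
# first top-level '->' it finds.)
ATOMS = set("PQRSTUVWXYZ")

def is_wff(s):
    s = s.replace(" ", "")
    if s in ATOMS:                       # clause [1]: atomic
        return True
    if s.startswith("~"):                # clause [2]: negation
        return is_wff(s[1:])
    if s.startswith("(") and s.endswith(")"):   # clause [3]: conditional
        inner = s[1:-1]
        depth = 0
        for i, ch in enumerate(inner):
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif inner[i:i+2] == "->" and depth == 0:
                return is_wff(inner[:i]) and is_wff(inner[i+2:])
    return False
```

Because each clause calls is_wff again on smaller parts, the output of one rule really is fair game as input to another, and arbitrarily deep WFFs get recognized.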

Rules of Inference:
MP
(modus ponens)
P→Q
P
Therefore Q

MT (modus tollens)
P→Q
~Q
Therefore ~P

DN (double negation)
~~P
Therefore P

P
Therefore ~~P

R (Repetition)
P
Therefore P
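Why are these rules self-evidently valid? Because no assignment of truth values makes their premises true and their conclusion false, and with finitely many propositional variables that can be checked exhaustively. A Python sketch (mine, not part of the course software):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def valid(premises, conclusion, n_vars=2):
    """True iff no assignment makes every premise true and the conclusion false."""
    return all(
        implies(all(p(*vals) for p in premises), conclusion(*vals))
        for vals in product((True, False), repeat=n_vars)
    )

# MP: P->Q, P  therefore  Q
mp = valid([lambda p, q: implies(p, q), lambda p, q: p],
           lambda p, q: q)

# MT: P->Q, ~Q  therefore  ~P
mt = valid([lambda p, q: implies(p, q), lambda p, q: not q],
           lambda p, q: not p)
```

By contrast, the fallacy of affirming the consequent (P→Q, Q, therefore P) fails this test.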

 

Derivations
To prove that an argument is valid, one first symbolizes the argument, to strip away all that is logically irrelevant, and then constructs a derivation. To construct a derivation is to prove that the premises, together with the self-evident Rules of Inference, are enough to guarantee the truth of the conclusion.

Derivation Rules
[1] All derivations start with a ‘Show’ line, where you state what you are setting out to prove.
[2] Any premise can be entered, at any point, and as many times as you like.
[3] Anything that follows by a Rule of Inference can and should be inferred.

[4-DD] If you derive what you set out to Show, then you are finished (by DD)
[5-CD] If what you want to Show is a conditional, then assume the antecedent (ASS CD) and you are finished (by CD) when you derive the consequent
[6-ID] You can always assume the opposite of what you want to Show (ASS ID), and you are finished when you derive a contradiction.

Auto-pilot steps (gleaned from Ch.1 of the text + the ‘Advice’ document):

1. Show Conc
2a. If it’s a ‘→’, then ASS CD & start a Show Cons sub-derivation
2b. If it’s anything else, then ASS ID
3. MP/MT out anything you can.
*For an easy Derivation, this will be enough
*For a hard Derivation, you’ve got to go to the Advanced Hints (p.36-38 of the text; Steps 8 & 9 of Advice)

Advanced Hints

4. If you are within an ID, look for a negated conditional (i.e., anything of the form ~(P→Q)), and ‘ShowUnneg’.
5. If [4] does not apply, look for a ‘→’ you can break into via ‘ShowAnt’ or ‘ShowNegCons’.
** The hardest problems in Ch.1 – e.g., #40, Ass’t #3 – require BOTH advanced hints. **Always try [4] first, then try [5]**

 

CHAPTER 2 NOTES

The syntax of the language of Ch.2 :

1. P-Z is a well-formed formula
2. any WFF preceded by ‘~’ is a WFF
3. (a) ‘→’ flanked by any 2 WFFs (and enclosed in parentheses) is a WFF
(b) ‘/\’ flanked by any 2 WFFs (and enclosed in parentheses) is a WFF
(c) ‘\/’ flanked by any 2 WFFs (and enclosed in parentheses) is a WFF
(d) ‘↔’ flanked by any 2 WFFs (and enclosed in parentheses) is a WFF

(a) ‘P→Q’ is called a ‘conditional’, with ‘P’ the antecedent and ‘Q’ the consequent.
(b) ‘P /\ Q’ is called a ‘conjunction’, with ‘P’ and ‘Q’ the conjuncts.
(c) ‘P \/ Q’ is called a ‘disjunction’, with ‘P’ and ‘Q’ the disjuncts.
(d) ‘P↔Q’ is called a ‘biconditional’, with ‘P’ its LHS and ‘Q’ its RHS.

The new Rules of Inference:

Adjunction (Adj) = Conjunction Introduction:
P
Q
Therefore P /\ Q

(You can also Adj to Q /\ P – order is irrelevant)

Simplification (S) = Conjunction Elimination:
P /\ Q
Therefore P

(You can also S to Q, because conjunction is symmetrical)

Addition (Add) = Disjunction Introduction:
P
Therefore P \/ Q

(You can also Add to Q \/ P – order is irrelevant)

Modus Tollendo Ponens (MTP) = Disjunction Elimination:
P \/ Q
~P
Therefore Q

(Again, since disjunction is symmetrical, the following is exactly the same sort of inference:

P \/ Q
~Q
Therefore P)

Conditional-Biconditional (CB) = Biconditional Introduction:
P → Q
Q → P
Therefore P ↔ Q

(Or, alternatively, you can CB to Q ↔ P)

Biconditional-Conditional (BC) = Biconditional Elimination:
P ↔ Q
Therefore P → Q

(Again, you can also BC to Q → P)
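These new rules can all be checked by exhaustive truth tables, just like the Ch.1 rules. A Python sketch (mine, not part of the course software) verifying MTP and CB:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def valid(premises, conclusion):
    """True iff no assignment of truth values to P, Q is a counterexample."""
    return all(
        implies(all(f(p, q) for f in premises), conclusion(p, q))
        for p, q in product((True, False), repeat=2)
    )

# MTP: P \/ Q, ~P  therefore  Q
mtp = valid([lambda p, q: p or q, lambda p, q: not p],
            lambda p, q: q)

# CB: P -> Q, Q -> P  therefore  P <-> Q
cb = valid([lambda p, q: implies(p, q), lambda p, q: implies(q, p)],
           lambda p, q: p == q)   # '==' on truth values is the biconditional
```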

 

Derived Rules of Inference
A derived Rule of Inference is basically just a particularly useful Theorem. There are 4 particularly important Derived Rules of Inference in Ch.2:

T40 [NC] ~(P→Q) ↔ (P /\ ~Q)
T45 [CDJ] (P \/ Q) ↔ (~P→Q)
Ts 65-66 [DM]
~(P /\ Q) ↔ (~P \/ ~Q)
~(P \/ Q) ↔ (~P /\ ~Q)
T90 [NB] ~(P↔Q) ↔ (P↔~Q)
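*Each of these is a tautology, which can be confirmed mechanically. A Python sketch (mine – not a substitute for actually Proving the Theorems in the program) checking all of them by truth table:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

def iff(a, b):
    return a == b

def tautology(f):
    """True iff f comes out true on every assignment to P, Q."""
    return all(f(p, q) for p, q in product((True, False), repeat=2))

nc  = tautology(lambda p, q: iff(not implies(p, q), p and not q))      # T40
cdj = tautology(lambda p, q: iff(p or q, implies(not p, q)))           # T45
dm1 = tautology(lambda p, q: iff(not (p and q), (not p) or (not q)))   # T65
dm2 = tautology(lambda p, q: iff(not (p or q), (not p) and (not q)))   # T66
nb  = tautology(lambda p, q: iff(not iff(p, q), iff(p, not q)))        # T90
```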

*In general, any Theorem you have proved then becomes an acceptable justification, to be used in any subsequent derivation.

*In particular, you have to Prove the corresponding Theorem to enable the above 4 most useful Derived Rules of Inference. (You have to do (most of) this for A#5.)

(**This is where ‘Backup’ becomes so important—as long as you ‘Backup’ your work after proving a Theorem, it will be there for you to use at any online computer. If you don’t regularly Backup, you are going to have to go back and re-Prove T90 every time you need NB, go back and re-Prove T45 every time you need CDJ, etc.**)

Within the program, there are two places at which all this is explained:

[1] In the ‘Recognizing’ module, hit the ‘References’ button and scroll down to ‘Inference Rules’
[2] In the ‘Derivations’ module, hit the ‘Help’ button and scroll down to ‘Prove to Use’, ‘Derived Rules’, ‘Theorems as Rules’.

 

Strategies
There are six kinds of WFF, and each has its own distinctive strategy; as long as you apply the correct strategy at each step, you’ll get there. (Note that complex Derivations involve several applications of the following Strategies; but as long as you do the right thing at each point, even a hard problem eventually reduces to a series of easy steps.)

[1] If the Conc is ATOMIC, then go by ID.
[2] If the Conc is a NEGATION, then go by ID.
[3] If the Conc is a CONDITIONAL, then go by CD.
[4] If the Conc is a CONJUNCTION, ShowConj x 2.
[5] If the Conc is a BICONDITIONAL, ShowCond x 2.
[6] If the Conc is a DISJUNCTION, either:
(a) go by ID and DM (this depends on having Proven T66 and enabled DM)
(b) Show Corr (this depends on having Proven T45 and enabled CDJ)

Advanced strategies:
Show Unneg, Show Ant (and ShowNegCons), Show NegDisj

 

Truth-table = a complete, stepwise test of invalidity within the propositional calculus. (Truth-tables can also determine validity, but it is their power to determine invalidity that makes them such a welcome addition to our toolkit.)

Truth-table method:
Step 1 (argument): Assign T to Premises, F to Conclusion.
Step 1 (theorem): Assign F to the Main Connective.

Step 2: See what this commits you to, eliminate all the irrelevant cases, zero in on the cases that are relevant for establishing invalidity. (e.g., any case in which any P is F is irrelevant, any case in which C is T is irrelevant.)

Step 3: If you can succeed with what you set out to show in Step 1, then you have proven invalidity.
If you cannot, if every relevant possibility leads to a contradiction, then (by ID) you’ve proven validity.
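The three steps amount to a search for a counterexample row. A brute-force Python sketch (mine; the argument tested, affirming the consequent, is an illustrative example rather than one of the text's numbered problems):

```python
from itertools import product

def counterexamples(premises, conclusion, n_vars):
    """Every assignment making all premises true and the conclusion false.
    A non-empty result proves invalidity; an empty result proves validity."""
    return [
        vals for vals in product((True, False), repeat=n_vars)
        if all(p(*vals) for p in premises) and not conclusion(*vals)
    ]

implies = lambda a, b: (not a) or b

# Affirming the consequent: P->Q, Q  therefore  P  -- invalid.
rows = counterexamples(
    [lambda p, q: implies(p, q), lambda p, q: q],   # premises get T
    lambda p, q: p,                                 # conclusion gets F
    n_vars=2,
)
# Exactly one relevant case survives: P false, Q true.
```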

 

CHAPTER 3 NOTES

In Ch. 3 we add four more things to the language of Ch 2:

1. quantifiers: ‘A’ for ‘all’ and ‘E’ for ‘some’.
2. variables: lower case letters i-z -- variables are the instrument of generality (just like in math where we use ‘x+y=y+x’ to express that addition is commutative).
3. predicate letters: A-O
4. name letters: a-h

*A quantifier is always followed by a variable; the result is called a ‘phrase of quantity’ or ‘QNP’.
*Think of the variable as like the pronoun ‘one’, and so ‘Ax’ translates as ‘everyone’ and ‘Ex’ as ‘someone’. (The common noun ‘thing’ is also so vague that it will do here as well – ‘Ax’ translates as ‘everything’, ‘Ex’ as ‘something’.)

New syntax:
To the language Ch.2 we add two more clauses:
(4) A predicate letter followed by a term (i.e., either name letter or variable) is a WFF.
(5) A quantifier + variable followed by a WFF is a WFF.

New Rules of Inference:
Two are easy, unrestricted :

UI:
AxFx
Therefore Fx, or Fa, or F(any term whatsoever)

EG:
Fx, or Fa, or F(any term whatsoever)
Therefore ExFx

Two have to be handled with care, as there are tight restrictions:

EI:
ExFx
Therefore Fi (only to an arbitrary variable – i.e., a new one, not already in use in the derivation)

UD:
Show AxFx
Show Fi (only from an arbitrary variable – i.e., a new one, not already in use in the derivation)

Derived Rule of Inference
QN is very similar to DM. It allows you to distribute a ‘~’ through a quantifier.

~AxFx
Therefore Ex~Fx

~ExFx
Therefore Ax~Fx
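Over a finite universe, QN really is just DM applied to the expanded conjunctions and disjunctions. A brute-force Python check (mine) over every possible extension of F in U = {0, 1}:

```python
# '~Ax Fx' is equivalent to 'Ex ~Fx', and '~Ex Fx' to 'Ax ~Fx'.
# Over U = {0, 1}, 'Ax' expands to a conjunction (all) and 'Ex'
# to a disjunction (any), so this is De Morgan in disguise.
U = [0, 1]
extensions = [set(), {0}, {1}, {0, 1}]   # every subset of U

for ext in extensions:
    F = lambda x: x in ext
    # ~Ax Fx  <->  Ex ~Fx
    assert (not all(F(x) for x in U)) == any(not F(x) for x in U)
    # ~Ex Fx  <->  Ax ~Fx
    assert (not any(F(x) for x in U)) == all(not F(x) for x in U)
```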

 

Derivation Hints and Strategies

Basically, in addition to a black belt in propositional logic, here are the three things you need to know to do every Ch.3 Derivation:

[1] ALWAYS ALWAYS ALWAYS EI before you UI, then UI to the same variable

[2] To Show a ‘Ax’, either:
a) go by UD (where applicable)
b) go by ID, then QN your ASS ID

[3] To Show a ‘Ex’, either:
a) go by ID, then QN your ASS ID
b) derive an instance of the formula, then EG

 

INVALIDITY, or: How do we extend the truth-table procedure into a complete test of invalidity for predicate logic?

There are three steps to designing a model:
[1] set a U
[2] expand out the quantifiers
[3] assign extensions such that Ps come out T and C F.

[1] To set a U is just to specify some precise domain for your quantifiers to range over, so that quantified formulae have precise truth-conditions. In general, it is a good idea to keep things as simple as possible; the smaller the U, the less complicated your truth-conditions will be. A good rule of thumb is to start with U={0,1}, and to increase by one if that won’t work.

[2] ‘Ex’ expands out to disjunction over U
‘Ax’ expands out to conjunction over U

[3] *The extension of a predicate (A-O) is a subset of U
*The extension of a name (a-h) is an element of U
*The extension of a proposition (P-Z) is T or F

**If you understand and execute [1]-[3], then the problem reduces to a Ch.2 exercise in making sure your Ps are all T while your C is F.
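Steps [1]-[3] can be carried out in code. A Python sketch (mine) building a model to show the invalidity of an illustrative argument – Ax(Fx→Gx), ExGx, therefore ExFx – which is not one of the text's numbered problems:

```python
# [1] set a U: here even a one-element universe suffices.
U = [0]

# [3] assign extensions: subsets of U for the predicates.
F = set()       # nothing is F
G = {0}         # 0 is G

implies = lambda a, b: (not a) or b

# [2] expand out the quantifiers over U:
premise1 = all(implies(x in F, x in G) for x in U)   # Ax(Fx -> Gx): conjunction
premise2 = any(x in G for x in U)                    # Ex Gx: disjunction
conclusion = any(x in F for x in U)                  # Ex Fx: disjunction

# Premises true, conclusion false => the argument is invalid.
```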

How to handle double quantifiers:

Derivations: eliminate the quantifiers one at a time; rules of inference only apply to complete formulae, not to proper parts of formulae.
Invalidity: expand quantifiers from the inside out, because the outside one is the main connective.
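The inside-out expansion for a double quantifier, sketched in Python (my example; L is a hypothetical two-place predicate):

```python
# Expanding 'Ax Ey Lxy' over U = {0, 1} from the inside out:
# first 'Ey Lxy' becomes a disjunction for each x, then the outer
# 'Ax' -- the main connective -- becomes a conjunction of those:
#   (L00 \/ L01) /\ (L10 \/ L11)
U = [0, 1]
L = {(0, 1), (1, 0)}   # sample extension: each thing L's the other

inner = {x: any((x, y) in L for y in U) for x in U}   # Ey Lxy, for each x
axey = all(inner[x] for x in U)                       # Ax (Ey Lxy)

# Order matters: 'Ey Ax Lxy' expands differently, and on this
# extension it comes out false while 'Ax Ey Lxy' is true.
eyax = any(all((x, y) in L for x in U) for y in U)
```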